
Equipping Experts/Bandits with Long-term Memory

Neural Information Processing Systems

We propose the first black-box approach to obtaining long-term memory guarantees for online learning in the sense of Bousquet and Warmuth (2002), by reducing the problem to achieving typical switching regret. Specifically, for the classical expert problem with $K$ actions and $T$ rounds, using our general framework we develop various algorithms with a regret bound of order $\order(\sqrt{T(S\ln T + n \ln K)})$ compared to any sequence of experts with $S-1$ switches among $n \leq \min\{S, K\}$ distinct experts. In addition, by plugging specific adaptive algorithms into our framework, we also achieve the best of both stochastic and adversarial environments simultaneously, which resolves an open problem of Warmuth and Koolen (2014). Furthermore, we extend our results to the sparse multi-armed bandit setting and show both negative and positive results for long-term memory guarantees. As a side result, our lower bound also implies that sparse losses do not help improve the worst-case regret for contextual bandits, in sharp contrast with the non-contextual case.


Reviews: Equipping Experts/Bandits with Long-term Memory

Neural Information Processing Systems

The main novelty in the full-information part of the results is the black-box reduction to the confidence-based framework. The key intuitive idea is that a confidence is generated for each expert based on how good it has looked in the past, i.e., how large the regret with respect to it has been so far (Step 7 of Algorithm 1), and this confidence is multiplied by the probabilities generated by the actual expert algorithm *with a static regret guarantee*. The algorithm and analysis provide a conceptual look into how switching regret minimization can be achieved, and are interesting. The proof (of Theorem 3) appears essentially correct. These ideas also yield new results for the sparse bandit version of the problem, which has seen recent progress.
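To make the mechanism described above concrete, here is a minimal illustrative sketch in the spirit of that description: a standard multiplicative-weights (Hedge) base algorithm whose per-expert probabilities are rescaled by a confidence term before play. This is *not* the paper's Algorithm 1; the function name `hedge_with_confidence` and the particular confidence rule (exponential in each expert's cumulative loss gap to the current best) are assumptions chosen only to illustrate the "confidence multiplied by base probabilities" idea.

```python
import math

def hedge_with_confidence(loss_rows, eta=0.5):
    """Hedge where each expert's base weight is rescaled by a confidence
    term before normalization. Illustrative only: the confidence here
    decays exponentially with an expert's cumulative-loss gap to the
    current best expert, a crude stand-in for a real confidence rule."""
    k = len(loss_rows[0])
    weights = [1.0] * k      # base Hedge weights (static-regret algorithm)
    cum_loss = [0.0] * k     # cumulative loss per expert
    total_loss = 0.0
    for losses in loss_rows:
        # confidence: experts that have looked good so far get more mass
        best = min(cum_loss)
        conf = [math.exp(-eta * (c - best)) for c in cum_loss]
        scored = [w * c for w, c in zip(weights, conf)]
        z = sum(scored)
        probs = [s / z for s in scored]
        total_loss += sum(p, l_ := 0) if False else sum(
            p * l for p, l in zip(probs, losses))
        # standard Hedge update on the base weights
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
        cum_loss = [c + l for c, l in zip(cum_loss, losses)]
    return total_loss, cum_loss

# two experts: expert 0 is good for the first 5 rounds, expert 1 afterwards
rows = [[0.0, 1.0]] * 5 + [[1.0, 0.0]] * 5
algo_loss, cum = hedge_with_confidence(rows)
```

On this toy sequence both experts end with cumulative loss 5, while the confidence rescaling lets the algorithm lean toward whichever expert has looked better recently, which is the intuition behind tracking a switching comparator.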


Reviews: Equipping Experts/Bandits with Long-term Memory

Neural Information Processing Systems

This paper revisits the long-term memory problem (switching within a small set of experts) in the full-information and bandit cases, with a specific focus on obtaining improved results in the stochastic case. The reviewers all appreciate the new black-box reduction proposed in this paper, and the way in which it is applied to the sparse bandit problem. Of course, it would be nice to have lower bounds and experiments... Yet the consensus is that the paper makes a sizable contribution as-is. During the review, the question was raised whether this arguably niche problem would find sufficient interest at NeurIPS. Past instances of the conference did host papers discussing aspects of this problem, see e.g.



Equipping Experts/Bandits with Long-term Memory

Zheng, Kai, Luo, Haipeng, Diakonikolas, Ilias, Wang, Liwei

Neural Information Processing Systems
